Temporal sentence grounding (TSG) aims to identify the temporal boundaries of a specific segment in an untrimmed video given a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two crucial issues: 1) Boundary bias: the annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take adjacent irrelevant frames as new boundaries. 2) Reasoning bias: such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on the boundaries for more accurate frame-query reasoning. Such a mechanism can also supplement the absent consecutive visual semantics to the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
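To make the boundary-bias issue concrete, the following minimal Python sketch shows how uniform sparse sampling shifts the annotated boundaries onto neighboring frames, and how Gaussian soft labels over the sampled timeline (one plausible form of the soft boundary labels the abstract mentions; the paper's exact formulation may differ) can spread supervision across those neighbors:

    import torch

    def soft_boundary_labels(start_t, end_t, num_frames, duration, sigma=1.0):
        # Timestamps of the uniformly sampled sparse frames.
        t = torch.linspace(0, duration, num_frames)
        # The true start/end rarely coincide with a sampled frame, so spread
        # each boundary over its temporal neighbors with a Gaussian kernel.
        s = torch.exp(-((t - start_t) ** 2) / (2 * sigma ** 2))
        e = torch.exp(-((t - end_t) ** 2) / (2 * sigma ** 2))
        return s / s.sum(), e / e.sum()

    # A 60 s video sampled to 16 frames: the 17.3 s start lands between frames.
    start_soft, end_soft = soft_boundary_labels(17.3, 42.8, 16, 60.0)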
Generalist models, which are capable of performing diverse multi-modal tasks in a task-agnostic way within a single model, have been explored recently. Although a promising route toward general-purpose AI, existing generalist models are still at an early stage, where modality and task coverage is limited. To empower multi-modal task-scaling and speed up this line of research, we release a generalist model learning system, OFASys, built on top of a declarative task interface named multi-modal instruction. At the core of OFASys is the idea of decoupling multi-modal task representations from the underlying model implementations. In OFASys, a task involving multiple modalities can be defined declaratively, even with just a single line of code. The system automatically generates task plans from such instructions for training and inference, and it also facilitates multi-task training for diverse multi-modal workloads. As a starting point, we provide presets of 7 different modalities and 23 highly diverse example tasks in OFASys, with which we also develop a first-of-its-kind single model, OFA+, that can handle text, image, speech, video, and motion data. The single OFA+ model achieves, on average, 95% of the performance of 15 task-finetuned models with only 16% of their parameters, showcasing the performance reliability of the multi-modal task-scaling provided by OFASys. Available at https://github.com/OFA-Sys/OFASys
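As an illustration of what such a one-line declarative task might look like, the snippet below writes out a multi-modal instruction in the slot-based style the OFASys paper uses for image captioning; the slot names and surrounding comments are illustrative assumptions, not a verified excerpt of the library's API:

    # A hypothetical multi-modal instruction: a slot is written as
    # [MODALITY:name], and '->' separates the model's inputs from its outputs.
    # Here an image slot plus a text prompt maps to a text caption slot.
    instruction = '[IMAGE:img] what does the image describe? -> [TEXT:cap]'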
The security of artificial intelligence (AI) is an important research area for building safe, reliable, and trustworthy AI systems. To accelerate research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, China Industrial Control Systems Cyber Emergency Response Team, Institute for Artificial Intelligence, Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks: the Deepfake Security Competition, the Autonomous Driving Security Competition, and the Face Recognition Security Competition. This report introduces the competition rules of these three tracks and the solutions of the top-ranking teams in each track.
In this paper, we propose a novel multi-modal multi-task encoder-decoder pre-training framework (MMSpeech) for Mandarin automatic speech recognition (ASR), which employs both unlabeled speech and text data. The main difficulty in speech-text joint pre-training comes from the significant difference between the speech and text modalities, especially for Mandarin speech and text. Unlike English and other languages with an alphabetic writing system, Mandarin uses an ideographic writing system where character and sound are not tightly mapped to one another. Therefore, we propose to introduce the phoneme modality into pre-training, which can help capture modality-invariant information between Mandarin speech and text. Specifically, we employ a multi-task learning framework including five self-supervised and supervised tasks with speech and text data. For end-to-end pre-training, we introduce self-supervised speech-to-pseudo-codes (S2C) and phoneme-to-text (P2T) tasks utilizing unlabeled speech and text data, where speech-pseudo-code pairs and phoneme-text pairs supplement the supervised speech-text pairs. To train the encoder to learn better speech representations, we introduce self-supervised masked speech prediction (MSP) and supervised phoneme prediction (PP) tasks that learn to map speech into phonemes. In addition, we directly add the downstream supervised speech-to-text (S2T) task into the pre-training process, which can further improve the pre-training performance and achieve better recognition results even without fine-tuning. Experiments on AISHELL-1 show that our proposed method achieves state-of-the-art performance, with a more than 40% relative improvement over other pre-training methods.
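A minimal sketch of the multi-task objective implied by the abstract is given below: the five per-task losses are combined as a weighted sum. The equal default weights and the dictionary interface are assumptions for illustration; the paper's actual weighting and batching scheme is not specified here.

    import torch

    def mmspeech_loss(task_losses, weights=None):
        # Combine the five pre-training objectives: S2C, P2T, MSP, PP, S2T.
        # Equal weights are an assumption; the paper may tune them per task.
        weights = weights or {name: 1.0 for name in task_losses}
        return sum(weights[name] * loss for name, loss in task_losses.items())

    # Toy scalar losses standing in for each task head's output.
    losses = {name: torch.rand(()) for name in ('s2c', 'p2t', 'msp', 'pp', 's2t')}
    total = mmspeech_loss(losses)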
Adversarial training with samples generated by the fast gradient sign method (FGSM), also known as FGSM-AT, is a computationally simple way to train robust networks. However, an unstable "catastrophic overfitting" mode was identified during training in arXiv:2001.03994 [cs.LG], where the robust accuracy suddenly drops to zero within a single training step. Existing methods use gradient regularizers or random-initialization tricks to mitigate this problem, but they either incur high computational cost or lead to lower robust accuracy. In this work, we provide the first study that thoroughly examines a collection of tricks from three perspectives, data initialization, network structure, and optimization, to overcome catastrophic overfitting in FGSM-AT. Surprisingly, we find that simple tricks, i.e., a) masking partial pixels (even without randomness), b) setting a larger convolution stride and smooth activation functions, or c) regularizing the weights of the first convolutional layer, can effectively tackle the overfitting problem. Extensive results on a range of network architectures validate the effectiveness of each proposed trick, and combinations of tricks are also studied. For example, trained with PreActResNet-18 on CIFAR-10, our method attains 49.8% accuracy against the PGD-50 attacker and 46.4% accuracy against AutoAttack, demonstrating that pure FGSM-AT is capable of enabling robust learners. Code and models are publicly available at https://github.com/ucsc-vlaa/bag-of-tricks-for-fgsm-at.
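As a concrete illustration, here is a hedged PyTorch sketch of a single FGSM-AT training step with trick (a), masking part of the input pixels before the gradient is taken. The masking ratio, the point at which the mask is applied, and the other details are illustrative assumptions rather than the paper's exact recipe:

    import torch
    import torch.nn.functional as F

    def fgsm_at_step(model, x, y, eps=8 / 255, mask_ratio=0.4):
        # Trick (a): zero out a subset of pixels before crafting the attack.
        mask = (torch.rand_like(x[:, :1]) > mask_ratio).float()
        x_masked = (x * mask).detach().requires_grad_(True)
        grad = torch.autograd.grad(
            F.cross_entropy(model(x_masked), y), x_masked)[0]
        # Single-step FGSM perturbation applied to the original image.
        x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0)
        return F.cross_entropy(model(x_adv), y)  # loss to backpropagate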
This paper studies the potential of distilling knowledge from pre-trained models, especially masked autoencoders. Our approach is simple: in addition to optimizing the pixel reconstruction loss on masked inputs, we minimize the distance between the intermediate feature maps of the teacher model and those of the student model. This design leads to a computationally efficient knowledge distillation framework, given that 1) only a small visible subset of patches is used, and 2) the (cumbersome) teacher model only needs to be partially executed, i.e., forward-propagating inputs through the first few layers, to obtain the intermediate feature maps. Compared with directly distilling fine-tuned models, distilling pre-trained models substantially improves downstream performance. For example, by distilling knowledge from an MAE pre-trained ViT-L into a ViT-B, our method achieves 84.0% ImageNet top-1 accuracy, outperforming the baseline of directly distilling a fine-tuned ViT-L by 1.2%. More intriguingly, our method can robustly distill knowledge from teacher models even with extremely high masking ratios: e.g., with only ten visible patches during distillation (a masking ratio of 95%), our ViT-B attains a competitive 83.6% top-1 ImageNet accuracy; surprisingly, it can still secure 82.4% top-1 ImageNet accuracy by aggressively training with only four visible patches (a masking ratio of 98%). Code and models are publicly available at https://github.com/ucsc-vlaa/dmae.
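The combined objective can be sketched in a few lines of PyTorch. The tensors below stand in for outputs of the actual models; the smooth-L1 distance, the weighting term alpha, and the assumption that student features are already projected to the teacher's width are illustrative choices, not the paper's verified settings:

    import torch
    import torch.nn.functional as F

    def dmae_loss(pred_pixels, target_pixels, feat_student, feat_teacher, alpha=1.0):
        # Standard MAE pixel reconstruction on the masked patches ...
        rec = F.mse_loss(pred_pixels, target_pixels)
        # ... plus alignment of intermediate features on the visible patches,
        # where the teacher only ran its first few layers.
        dist = F.smooth_l1_loss(feat_student, feat_teacher)
        return rec + alpha * dist

    # Toy shapes: batch 2, 10 visible patches (95% masking of 196), width 1024.
    pred = torch.randn(2, 186, 16 * 16 * 3)   # reconstructed masked patches
    tgt = torch.randn(2, 186, 16 * 16 * 3)
    f_s = torch.randn(2, 10, 1024)            # student features after projection
    f_t = torch.randn(2, 10, 1024)            # teacher features, first few layers
    loss = dmae_loss(pred, tgt, f_s, f_t)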
While fine-tuning pre-trained networks has become a popular way to train image segmentation models, such backbone networks for image segmentation are frequently pre-trained on image classification source datasets, e.g., ImageNet. Although image classification datasets can provide the backbone networks with rich visual features and discriminative ability, they are incapable of fully pre-training the target model (i.e., backbone + segmentation modules) in an end-to-end manner. Because classification datasets lack segmentation labels, the segmentation modules are randomly initialized during fine-tuning. In our work, we propose a method that leverages Pseudo Semantic Segmentation Labels (PSSL) to enable end-to-end pre-training of image segmentation models on classification datasets. PSSL is inspired by the observation that the explanation results of classification models, obtained through explanation algorithms such as CAM, SmoothGrad, and LIME, are close to the pixel clusters of visual objects. Specifically, PSSL is obtained for each image by interpreting the classification results and aggregating an ensemble of explanations queried from multiple classifiers to lower the bias caused by any single model. With PSSL for every image of ImageNet, the proposed method leverages a weighted segmentation learning procedure to pre-train the segmentation network. Experimental results show that, with ImageNet accompanied by PSSL as the source dataset, the proposed end-to-end pre-training strategy successfully boosts the performance of various segmentation models, i.e., PSPNet-ResNet50, DeepLabV3-ResNet50, and OCRNet-HRNetW18, on a number of segmentation tasks, such as CamVid, VOC-A, VOC-C, ADE20K, and CityScapes, with significant improvements. The source code is available at https://github.com/paddlepaddle/paddleseg.
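The label-generation step can be sketched as follows: explanation maps from several classifiers are normalized, averaged to reduce single-model bias, and turned into a pseudo mask with per-pixel confidence weights for the weighted segmentation loss. The thresholding and weighting scheme here is an illustrative assumption, not the paper's exact procedure:

    import torch

    def make_pssl(saliency_maps, fg_thresh=0.5):
        # One explanation map per classifier (e.g., CAM, SmoothGrad, LIME).
        maps = torch.stack(saliency_maps)  # (K, H, W)
        lo = maps.amin(dim=(1, 2), keepdim=True)
        hi = maps.amax(dim=(1, 2), keepdim=True)
        maps = (maps - lo) / (hi - lo + 1e-6)          # normalize each map to [0, 1]
        mean_map = maps.mean(0)                        # ensemble lowers single-model bias
        pseudo_label = (mean_map > fg_thresh).long()   # 1 = object, 0 = background
        weight = (mean_map - fg_thresh).abs() * 2      # confidence for the weighted loss
        return pseudo_label, weight

    label, w = make_pssl([torch.rand(224, 224) for _ in range(3)])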
Spatio-temporal video grounding (STVG) is a challenging task that aims to semantically localize the spatio-temporal tube of an object of interest according to a natural language query. Most previous works not only rely heavily on anchor boxes extracted by Faster R-CNN, but also simply treat the video as a series of individual frames, thus lacking temporal modeling. Instead, in this paper, we are the first to propose an anchor-free framework for STVG, called the Gaussian Kernel-based Cross-Modal Network (GKCMN). Specifically, we utilize Gaussian kernel-based heatmaps of each video frame to locate the query-related object. A mixed serial and parallel connection network is further developed to exploit both spatial and temporal relations among frames for better grounding. Experimental results on the VidSTG dataset demonstrate the effectiveness of our proposed GKCMN.
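The heatmap at the core of the anchor-free design can be written down directly; a minimal sketch, assuming the object is localized by a single (x, y) center per frame, is a 2-D Gaussian kernel peaked at that center:

    import torch

    def gaussian_heatmap(height, width, center, sigma=4.0):
        # 2-D Gaussian kernel heatmap peaked at the query-related object center.
        cx, cy = center
        ys = torch.arange(height, dtype=torch.float32).view(-1, 1)
        xs = torch.arange(width, dtype=torch.float32).view(1, -1)
        return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

    heat = gaussian_heatmap(64, 64, center=(20.0, 35.0))  # per-frame target map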
Sepsis is a leading cause of death in the ICU. It is a disease requiring complex interventions within a short period of time, but its optimal treatment strategy remains uncertain. Evidence suggests that current treatment practices are problematic and may cause harm to patients. To address this decision-making problem, we propose a new medical decision model based on historical data to help clinicians recommend the best reference options for real-time treatment. Our model combines offline reinforcement learning with deep reinforcement learning to address the problem that traditional reinforcement learning in healthcare cannot interact with the environment, enabling our model to make decisions in a continuous state-action space. We demonstrate that, on average, the treatments recommended by the model are more valuable and reliable than those recommended by clinicians. In a large validation dataset, we found that patients whose actual doses from clinicians matched the AI's decisions had the lowest mortality rates. Our model provides personalized, clinically interpretable treatment decisions for sepsis that can improve patient care.
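The abstract does not name the underlying algorithm, so the sketch below uses a standard off-the-shelf recipe for offline RL in continuous action spaces, a TD3+BC-style actor objective that maximizes the critic's value while keeping recommended doses close to the clinicians' logged ones; treat it as one plausible instantiation, not the paper's method:

    import torch
    import torch.nn.functional as F

    def offline_actor_loss(actor, critic, states, clinician_actions, bc_weight=2.5):
        # Maximize Q for the recommended doses ...
        actions = actor(states)
        q = critic(states, actions)
        lam = bc_weight / q.abs().mean().detach()   # scale-invariant trade-off
        # ... while regularizing toward the clinicians' logged doses, which
        # keeps the policy inside the support of the historical data.
        return -(lam * q).mean() + F.mse_loss(actions, clinician_actions)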
The recent success of Vision Transformers is shaking the long dominance of Convolutional Neural Networks (CNNs) in image recognition. Specifically, in terms of robustness, recent studies find that Transformers are inherently more robust than CNNs, regardless of the training setup. Moreover, it is believed that such superiority of Transformers should largely be credited to their self-attention-like architectures per se. In this paper, we question this belief by closely examining the design of Transformers. Our findings lead to three highly effective architecture designs for boosting robustness, yet simple enough to be implemented in several lines of code, namely a) patchifying input images, b) enlarging the kernel size, and c) reducing activation and normalization layers. Bringing these components together, we are able to build pure CNN architectures, without any attention-like operations, that are as robust as, or even more robust than, Transformers. We hope this work can help the community better understand the design of robust neural architectures. The code is publicly available at https://github.com/ucsc-vlaa/robustcnn.
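A hedged sketch of the three designs in PyTorch is shown below: a ViT-style patchify stem, a depthwise convolution with an enlarged kernel, and a block carrying a single activation and no normalization layers. Channel widths, kernel size, and block layout are illustrative, not the paper's exact architectures:

    import torch.nn as nn

    # a) Patchify the input: a strided conv that splits the image into 16x16 patches.
    stem = nn.Conv2d(3, 96, kernel_size=16, stride=16)

    class RobustBlock(nn.Module):
        def __init__(self, dim):
            super().__init__()
            # b) Enlarge the kernel size via a depthwise 11x11 convolution.
            self.dw = nn.Conv2d(dim, dim, kernel_size=11, padding=5, groups=dim)
            self.pw = nn.Conv2d(dim, dim, kernel_size=1)
            # c) Reduce activation/normalization layers: one activation, no norm.
            self.act = nn.GELU()

        def forward(self, x):
            return x + self.pw(self.act(self.dw(x)))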